Categorical vs. Dimensional Representations in Multimodal Affect Detection during Learning

Authors

  • M. Sazzad Hussain
  • Hamed Monkaresi
  • Rafael A. Calvo
Abstract

Learners experience a variety of emotions during learning sessions with Intelligent Tutoring Systems (ITS). The research community is building systems that are aware of these experiences, generally represented as a category or as a point in a low-dimensional space. State-of-the-art systems detect these affective states from multimodal data in naturalistic scenarios. This paper provides evidence of how the choice of representation affects the quality of the detection system. We present a user-independent model for detecting learners’ affective states from video and physiological signals using both the categorical and dimensional representations. Machine learning techniques are used for selecting the best subset of features and classifying the varying degrees of emotion for both representations. We provide evidence that the dimensional representation, particularly using valence, produces higher accuracy.
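The abstract's pipeline — select a feature subset, then train one classifier on categorical emotion labels and another on a dimensional (valence) label — can be sketched on synthetic data. Everything below is an illustrative stand-in, not the authors' actual method: the toy data generator, the mean-gap feature ranking, and the nearest-centroid classifier are all hypothetical simplifications of the machine-learning techniques the paper leaves unspecified.

```python
import random

random.seed(0)
N_FEATURES = 10  # stand-in for video + physiological feature vector

def make_sample():
    """Synthetic sample: hidden valence/arousal drive 2 features; 8 are noise."""
    valence = random.uniform(-1.0, 1.0)
    arousal = random.uniform(-1.0, 1.0)
    feats = [valence + random.gauss(0, 0.3),
             arousal + random.gauss(0, 0.3)]
    feats += [random.gauss(0, 1.0) for _ in range(N_FEATURES - 2)]
    cat_label = (1 if valence > 0 else 0) * 2 + (1 if arousal > 0 else 0)
    val_label = 1 if valence > 0 else 0  # binarised valence dimension
    return feats, cat_label, val_label

data = [make_sample() for _ in range(300)]
train, test = data[:200], data[200:]

def select_features(rows, labels, k=3):
    """Rank features by spread of per-class means; keep the top k."""
    classes = sorted(set(labels))
    scores = []
    for j in range(N_FEATURES):
        class_means = []
        for c in classes:
            vals = [r[j] for r, l in zip(rows, labels) if l == c]
            class_means.append(sum(vals) / len(vals))
        scores.append((max(class_means) - min(class_means), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]

def centroids(rows, labels, idx):
    cents = {}
    for c in sorted(set(labels)):
        members = [r for r, l in zip(rows, labels) if l == c]
        cents[c] = [sum(r[j] for r in members) / len(members) for j in idx]
    return cents

def predict(row, cents, idx):
    """Nearest-centroid classification on the selected feature subset."""
    def sqdist(c):
        return sum((row[j] - m) ** 2 for j, m in zip(idx, cents[c]))
    return min(cents, key=sqdist)

def accuracy(label_of):
    tr_rows = [f for f, _, _ in train]
    tr_labels = [label_of(s) for s in train]
    idx = select_features(tr_rows, tr_labels)
    cents = centroids(tr_rows, tr_labels, idx)
    correct = sum(predict(s[0], cents, idx) == label_of(s) for s in test)
    return correct / len(test)

cat_acc = accuracy(lambda s: s[1])  # 4-class categorical labels
val_acc = accuracy(lambda s: s[2])  # binary valence dimension
print(f"categorical (4-class) accuracy: {cat_acc:.2f}")
print(f"valence (binary) accuracy:      {val_acc:.2f}")
```

With this toy setup the binary valence task is easier than the 4-class categorical one (fewer classes, same informative signal), loosely mirroring the direction of the paper's finding; the synthetic accuracies themselves carry no evidential weight.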

Similar articles

Learning Representations of Affect from Speech

There has been substantial prior work on representation learning for speech recognition, but little attention has been paid to effective representations of affect from speech, where the paralinguistic elements of speech are separated from the verbal content. In this paper, we explore denoising autoencoders for learning paralinguistic attributes, i.e. categori...


BNU-LSVED 2.0: Spontaneous multimodal student affect database with multi-dimensional labels

In college classrooms, large quantities of digital-media data showing students’ affective behaviors are continuously captured by cameras on a daily basis. To provide a benchmark for affect recognition using these big data collections, in this paper we propose the first large-scale spontaneous and multimodal student affect database. All videos in our database were selected from daily big data r...


Multimodal Sparse Coding for Event Detection

Unsupervised feature learning methods have proven effective for classification tasks based on a single modality. We present multimodal sparse coding for learning feature representations shared across multiple modalities. The shared representations are applied to multimedia event detection (MED) and evaluated in comparison to unimodal counterparts, as well as other feature learning methods such ...


Learning Deep Representations, Embeddings and Codes from the Pixel Level of Natural and Medical Images

Significant research has gone into engineering representations that can identify high-level semantic structure in images, such as objects, people, events and scenes. Recently there has been a shift towards learning representations of images either on top of dense features or directly from the pixel level. These features are often learned in hierarchies using large amounts of unlabeled data with...


Multimodal Affect Detection from Physiological and Facial Features during ITS Interaction

Multimodal approaches are increasingly used for affect detection. This paper proposes a model that fuses physiological signals measuring learners’ heart activity with their facial expressions to detect learners’ affective states while they interact with an Intelligent Tutoring System (ITS). It studies machine learning and fusion techniques that classify the system’s automated feedba...




Journal:

Volume   Issue 

Pages  -

Publication date: 2012